808 research outputs found

    Comprehensive Evaluation of Environmental Policy for Water Pollutants and Greenhouse Gases Reduction in Jiaxing city, China

    Rapid economic development in China has generated various environmental problems, because economic growth has been emphasized over environmental protection; it is therefore important to enforce optimal environmental policies that achieve economic development together with environmental improvement. In this study, we selected Jiaxing city as the research area because its environmental pollution problems have become prominent alongside rapid economic growth, and we constructed an environmental system model and a socio-economic model to establish policy scenarios. Through computer simulation, we evaluate the efficiency of comprehensive environmental policies from the perspectives of both environmental preservation and socio-economic development. The socio-economic model describes socio-economic activities, covering vital events (population dynamics), wealth, and services such as production, finance, and budget, while the environmental system model describes the movement of water pollutants and greenhouse gases in the region. A dynamic optimization simulation is carried out on this coupled environmental and socio-economic system model. Subject to constraints on total water pollutant emissions, total greenhouse gas emissions, and economic activity in the catchment area, the simulation yields concrete, endogenously generated optimal policies that achieve the best combination of economic and environmental improvement, taking policy, regional, and timing choices into account for Jiaxing City, China.
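    As a loose illustration of the kind of constrained dynamic optimization the abstract describes, the following minimal Python sketch maximizes a hypothetical regional output over several periods subject to caps on water-pollutant and greenhouse-gas emissions; all coefficients, caps, and variable names are invented for illustration and are not taken from the paper.
```python
# Minimal sketch of a constrained dynamic policy optimization (illustrative only).
# Decision variables: abatement effort per period; all coefficients are hypothetical.
import numpy as np
from scipy.optimize import minimize

T = 10                        # planning periods
base_output = 100.0           # baseline economic output per period (hypothetical units)
abate_cost = 5.0              # output lost per unit of abatement effort
wp_cap, ghg_cap = 60.0, 80.0  # total caps on water pollutants / greenhouse gases

def objective(effort):
    # Maximize total output net of abatement cost (minimize its negative).
    return -np.sum(base_output - abate_cost * effort)

def wp_constraint(effort):
    # Water-pollutant emissions fall with abatement effort; stay under the cap.
    return wp_cap - np.sum(10.0 - 2.0 * effort)

def ghg_constraint(effort):
    # Greenhouse-gas emissions fall with abatement effort; stay under the cap.
    return ghg_cap - np.sum(12.0 - 1.5 * effort)

res = minimize(
    objective,
    x0=np.zeros(T),
    bounds=[(0.0, 5.0)] * T,  # feasible abatement effort per period
    constraints=[{"type": "ineq", "fun": wp_constraint},
                 {"type": "ineq", "fun": ghg_constraint}],
)
print("optimal abatement schedule:", np.round(res.x, 2))
```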

    Identification and Estimation of Causal Effects Using non-Gaussianity and Auxiliary Covariates

    Assessing causal effects in the presence of unmeasured confounding is a challenging problem. Although auxiliary variables, such as instrumental variables, are commonly used to identify causal effects, they are often unavailable in practice due to stringent and untestable conditions. To address this issue, previous studies have used linear structural equation models to show that the causal effect is identifiable when the noise variables of the treatment and the outcome are both non-Gaussian. In this paper, we investigate the problem of identifying the causal effect using auxiliary covariates and non-Gaussianity of the treatment. Our key idea is to characterize the impact of unmeasured confounders through an observed covariate, assuming they are all Gaussian. The auxiliary covariate can be an invalid instrument or an invalid proxy variable. We demonstrate that the causal effect can be identified using this measured covariate, even when the only source of non-Gaussianity comes from the treatment. We then extend the identification results to the multi-treatment setting and provide sufficient conditions for identification. Based on our identification results, we propose a simple and efficient procedure for estimating causal effects and show the $\sqrt{n}$-consistency of the proposed estimator. Finally, we evaluate the performance of our estimator through simulation studies and an application. Comment: 16 pages, 7 figures
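    To make the setting concrete, the following small simulation sketches a linear structural equation model with a Gaussian unmeasured confounder, a Gaussian auxiliary covariate, and non-Gaussian treatment noise; it only illustrates why naive regression is biased in this setup and does not implement the paper's identification procedure. All coefficients are arbitrary.
```python
# Illustrative simulation of the setting described above (not the paper's estimator):
# a Gaussian unmeasured confounder U, a Gaussian auxiliary covariate W, a treatment X
# with non-Gaussian (uniform) noise, and an outcome Y with Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 1.5                                   # true causal effect of X on Y

U = rng.normal(size=n)                       # unmeasured Gaussian confounder
W = 0.8 * U + rng.normal(size=n)             # observed covariate affected by U
eps_x = rng.uniform(-1, 1, size=n)           # non-Gaussian treatment noise
X = 0.7 * U + 0.5 * W + eps_x
Y = beta * X + 1.2 * U + rng.normal(size=n)  # Gaussian outcome noise

# Naive regression of Y on X is biased because U is unobserved.
naive_slope = np.polyfit(X, Y, 1)[0]
print(f"true effect = {beta}, naive OLS slope = {naive_slope:.3f}")
```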

    PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning

    Online class-incremental continual learning is a specific task of continual learning: the model must continuously learn new classes from a data stream whose samples are seen only once, and it therefore suffers from catastrophic forgetting, i.e., forgetting historical knowledge of old classes. Existing replay-based methods effectively alleviate this issue by saving part of the old data and replaying it in either a proxy-based or a contrastive-based manner. Although both replay manners are effective, the former tends to be biased toward new classes due to class imbalance, while the latter is unstable and hard to converge because of the limited number of replayed samples. In this paper, we conduct a comprehensive analysis of the two replay manners and find that they can be complementary. Inspired by this finding, we propose a novel replay-based method called proxy-based contrastive replay (PCR). The key operation is to replace the contrastive samples of anchors with the corresponding proxies in the contrastive-based manner. This alleviates catastrophic forgetting by effectively addressing the imbalance issue while retaining the faster convergence of proxy-based training. We conduct extensive experiments on three real-world benchmark datasets, and the empirical results consistently demonstrate the superiority of PCR over various state-of-the-art methods. Comment: To appear in CVPR 2023. 10 pages, 8 figures and 3 tables
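    The key operation described above can be illustrated by a proxy-based contrastive loss in which each anchor is contrasted against learnable class proxies rather than against other samples. The sketch below is our reading of that idea, not the authors' released code; the function names and temperature value are ours.
```python
# Minimal PyTorch sketch of a proxy-based contrastive loss in the spirit of PCR:
# anchors are contrasted against class proxies (e.g., classifier weight vectors)
# instead of other samples from the batch or memory buffer.
import torch
import torch.nn.functional as F

def proxy_contrastive_loss(features, labels, proxies, temperature=0.09):
    """features: (B, D) anchor embeddings from the current batch plus replayed memory.
    labels:   (B,) class ids.
    proxies:  (C, D) one learnable proxy per class seen so far."""
    feats = F.normalize(features, dim=1)
    prox = F.normalize(proxies, dim=1)
    logits = feats @ prox.t() / temperature  # (B, C) anchor-proxy similarities
    # Softmax over proxies: pull each anchor toward its own class proxy,
    # push it away from the proxies of all other classes.
    return F.cross_entropy(logits, labels)

# Usage sketch: the proxies could simply be the weights of the linear classifier head.
# loss = proxy_contrastive_loss(backbone(x), y, classifier.weight)
```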

    MetaNODE: Prototype Optimization as a Neural ODE for Few-Shot Learning

    Few-Shot Learning (FSL) is a challenging task: how can novel classes be recognized from only a few examples? Pre-training based methods tackle the problem effectively by pre-training a feature extractor and then predicting novel classes via a cosine nearest-neighbor classifier with mean-based prototypes. Nevertheless, due to data scarcity, the mean-based prototypes are usually biased. In this paper, we attempt to diminish the prototype bias by regarding it as a prototype optimization problem. To this end, we propose a novel meta-learning based prototype optimization framework that rectifies prototypes, i.e., it introduces a meta-optimizer to optimize the prototypes. Although existing meta-optimizers could also be adapted to our framework, they all overlook a crucial gradient bias issue: the mean-based gradient estimate is also biased on sparse data. To address this issue, we regard the gradient and its flow as meta-knowledge and propose a novel Neural Ordinary Differential Equation (ODE)-based meta-optimizer, called MetaNODE, to polish the prototypes. In this meta-optimizer, we first take the mean-based prototypes as initial prototypes and then model the process of prototype optimization as continuous-time dynamics specified by a Neural ODE. A gradient flow inference network is carefully designed to learn to estimate the continuous gradient flow for the prototype dynamics. Finally, the optimal prototypes are obtained by solving the Neural ODE. Extensive experiments on miniImagenet, tieredImagenet, and CUB-200-2011 show the effectiveness of our method. Comment: Accepted by AAAI 2022
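    The overall idea can be sketched as follows: class-mean prototypes serve as the initial state, a small network predicts their continuous gradient flow, and an ODE solver integrates the dynamics. The sketch uses a fixed-step Euler solver for self-containedness; the architecture, step size, and number of steps are illustrative choices, not the paper's.
```python
# Minimal PyTorch sketch of prototype optimization as continuous-time dynamics,
# in the spirit of MetaNODE: a small network predicts d(prototypes)/dt and a
# fixed-step Euler solver integrates the learned ODE.
import torch
import torch.nn as nn

class GradientFlowNet(nn.Module):
    """Predicts d(prototypes)/dt from the current prototypes."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, t, prototypes):  # (C, D) -> (C, D); t kept for ODE convention
        return self.net(prototypes)

def refine_prototypes(support_feats, support_labels, num_classes, flow, steps=10, dt=0.1):
    # Initial prototypes = class means of the support features (the usual FSL choice).
    protos = torch.stack([support_feats[support_labels == c].mean(0)
                          for c in range(num_classes)])
    # Euler integration of the learned ODE: p_{t+dt} = p_t + dt * f(t, p_t).
    t = torch.tensor(0.0)
    for _ in range(steps):
        protos = protos + dt * flow(t, protos)
        t = t + dt
    return protos  # refined (C, D) prototypes
```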

    Leveraging Key Information Modeling to Improve Less-Data Constrained News Headline Generation via Duality Fine-Tuning

    Recent generative language models are mostly trained on large-scale datasets, while in some real scenarios the training datasets are expensive to obtain and therefore small-scale. In this paper, we investigate the challenging task of less-data constrained generation, especially when the generated news headlines are short yet expected by readers to remain readable and informative. We highlight the key information modeling task and propose a novel duality fine-tuning method by formally defining probabilistic duality constraints between the key information prediction and headline generation tasks. The proposed method can capture more information from limited data, build connections between the separate tasks, and is well suited to less-data constrained generation. Furthermore, the method can leverage various pre-trained generative regimes, e.g., autoregressive and encoder-decoder models. We conduct extensive experiments demonstrating that our method is effective and efficient, achieving improved performance on language modeling and informativeness correctness metrics on two public datasets. Comment: Accepted by AACL-IJCNLP 2022 main conference
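    One common way to encode a probabilistic duality constraint between two dual tasks (in the style of dual supervised learning) is a soft penalty on the duality gap log p(a) + log p(b|a) - log p(b) - log p(a|b); whether this matches the paper's exact formulation is an assumption on our part, and the function and argument names below are hypothetical.
```python
# Sketch of a duality-regularized fine-tuning objective: two dual tasks are trained
# jointly with a soft penalty on the probabilistic duality gap. This is an assumed
# formulation in the style of dual supervised learning, not the paper's exact loss.
import torch

def duality_finetune_loss(logp_b_given_a, logp_a_given_b, logp_a, logp_b, lam=1.0):
    """All arguments are per-example log-probabilities (shape: batch).
    Task 1: predict b from a (e.g., key information from the article).
    Task 2: predict a from b (e.g., headline generation conditioned on key info)."""
    task_loss = -(logp_b_given_a + logp_a_given_b).mean()
    # Duality constraint: log p(a) + log p(b|a) should equal log p(b) + log p(a|b).
    duality_gap = (logp_a + logp_b_given_a) - (logp_b + logp_a_given_b)
    return task_loss + lam * (duality_gap ** 2).mean()

# Usage sketch: the conditional log-probs would come from the two fine-tuned models,
# the marginals from (possibly frozen) language models; lam sets the constraint strength.
```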